
    3D shape based reconstruction of experimental data in Diffuse Optical Tomography

    Diffuse optical tomography (DOT) aims at recovering three-dimensional images of absorption and scattering parameters inside a diffusive body based on a small number of transmission measurements at the boundary of the body. This image reconstruction problem is known to be an ill-posed inverse problem, which requires the use of prior information for successful reconstruction. We present a shape-based method for DOT, where we assume a priori that the unknown body consists of disjoint subdomains with different optical properties. We utilize a spherical harmonics expansion to parameterize the reconstruction problem with respect to the subdomain boundaries, and introduce a finite element (FEM) based algorithm that uses a novel 3D mesh subdivision technique to describe the mapping from spherical harmonics coefficients to the 3D absorption and scattering distributions inside an unstructured volumetric FEM mesh. We evaluate the shape-based method by reconstructing experimental DOT data from a cylindrical phantom with one inclusion with high absorption and one with high scattering. The reconstruction was monitored, and we found an 87% reduction in the Hausdorff measure between targets and reconstructed inclusions, 96% success in recovering the locations of the centers of the inclusions, and 87% success on average in the recovery of the volumes.
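    As a rough illustration of the parameterization described above (not code from the paper), the sketch below evaluates the radius of a star-shaped subdomain boundary from a vector of real spherical harmonics coefficients; the coefficient ordering, degree cutoff and function names are hypothetical, and scipy is assumed.

```python
import numpy as np
from scipy.special import sph_harm

def boundary_radius(coeffs, polar, azimuth, l_max):
    """r(theta, phi) = sum_{l,m} c_{lm} Re[Y_l^m(theta, phi)] for a star-shaped boundary."""
    r = np.zeros_like(polar, dtype=float)
    idx = 0
    for l in range(l_max + 1):
        for m in range(-l, l + 1):
            # scipy's sph_harm takes (m, l, azimuthal angle, polar angle)
            r += coeffs[idx] * np.real(sph_harm(m, l, azimuth, polar))
            idx += 1
    return r

# Example: a sphere of mean radius 10 mm with a small l = 2 perturbation
l_max = 2
coeffs = np.zeros((l_max + 1) ** 2)
coeffs[0] = 10.0 * np.sqrt(4.0 * np.pi)   # l = 0 term sets the mean radius (Y_0^0 = 1/sqrt(4 pi))
coeffs[6] = 0.5                           # one l = 2, m = 0 component
polar, azimuth = np.meshgrid(np.linspace(0.0, np.pi, 60),
                             np.linspace(0.0, 2.0 * np.pi, 120), indexing="ij")
radii = boundary_radius(coeffs, polar, azimuth, l_max)
```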

    Nonlinear approach to difference imaging in diffuse optical tomography

    Difference imaging aims at recovering the change in the optical properties of a body based on measurements before and after the change. Conventionally, the image reconstruction is based on the difference of the measurements and a linear approximation of the observation model. One of the main benefits of the linearized difference reconstruction is that the approach has good tolerance to modeling errors, which partially cancel out in the subtraction of the measurements. However, a drawback of the approach is that the difference images are usually only qualitative in nature and their spatial resolution can be weak, because they rely on a global linearization of the nonlinear observation model. To overcome the limitations of the linear approach, we investigate a nonlinear approach to difference imaging, where the images of the optical parameters before and after the change are reconstructed simultaneously based on the two datasets. We tested the feasibility of the method with simulations and experimental data from a phantom, and studied how the approach tolerates modeling errors such as domain truncation, optode coupling errors, and domain shape errors.
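    The simultaneous-reconstruction idea can be sketched with a toy nonlinear forward model (purely illustrative, not the FEM-based diffusion model of the paper): both parameter images enter a single least-squares problem, with the "after" state written as the "before" state plus a change. The names and the regularization weight below are hypothetical.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
J = rng.normal(size=(40, 10))
def forward(x):
    # toy nonlinear observation model standing in for the DOT forward map
    return np.exp(-J @ x)

x_before = np.full(10, 0.1)
change = np.zeros(10); change[3] = 0.05
y1 = forward(x_before) + 1e-3 * rng.normal(size=40)            # data before the change
y2 = forward(x_before + change) + 1e-3 * rng.normal(size=40)   # data after the change

def residuals(p, alpha=1e-2):
    x1, dx = p[:10], p[10:]
    # both datasets constrain x1; only the second constrains x1 + dx
    return np.concatenate([forward(x1) - y1,
                           forward(x1 + dx) - y2,
                           alpha * dx])                         # mild penalty on the change

sol = least_squares(residuals, np.zeros(20))
x1_hat, dx_hat = sol.x[:10], sol.x[10:]
```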

    Fast Gibbs sampling for high-dimensional Bayesian inversion

    Solving ill-posed inverse problems by Bayesian inference has recently attracted considerable attention. Compared to deterministic approaches, the probabilistic representation of the solution by the posterior distribution can be exploited to explore and quantify its uncertainties. In applications where the inverse solution is subject to further analysis procedures, this can be a significant advantage. Alongside theoretical progress, various new computational techniques allow sampling from very high-dimensional posterior distributions: in [Lucka2012], a Markov chain Monte Carlo (MCMC) posterior sampler was developed for linear inverse problems with $\ell_1$-type priors. In this article, we extend this single-component Gibbs-type sampler to a wide range of priors used in Bayesian inversion, such as general $\ell_p^q$ priors with additional hard constraints. Besides a fast computation of the conditional, single-component densities in an explicit, parameterized form, fast, robust and exact sampling from these one-dimensional densities is key to obtaining an efficient algorithm. We demonstrate that a generalization of slice sampling can utilize their specific structure for this task and illustrate the performance of the resulting slice-within-Gibbs samplers by different computed examples. These new samplers allow us to perform sample-based Bayesian inference in high-dimensional scenarios with certain priors for the first time, including the inversion of computed tomography (CT) data with the popular isotropic total variation (TV) prior.
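    A minimal sketch of the single-component (Gibbs-type) update, here with a generic 1-D slice sampler, for a linear model with an $\ell_1$ prior; the paper instead samples the explicit, parameterized conditionals exactly, so this is only meant to convey the structure of the algorithm. All names are hypothetical and only numpy is assumed.

```python
import numpy as np

def slice_sample_1d(logpdf, x0, w=1.0, max_shrink=50):
    """One slice-sampling update for an unnormalized univariate log-density."""
    level = logpdf(x0) + np.log(np.random.rand())     # vertical slice level
    left = x0 - w * np.random.rand()
    right = left + w
    while logpdf(left) > level:                       # step out
        left -= w
    while logpdf(right) > level:
        right += w
    for _ in range(max_shrink):                       # shrink until a point is accepted
        x1 = left + (right - left) * np.random.rand()
        if logpdf(x1) >= level:
            return x1
        if x1 < x0:
            left = x1
        else:
            right = x1
    return x0

def gibbs_l1(A, y, lam, sigma, n_samples=500):
    """Single-component Gibbs for pi(x) propto exp(-||A x - y||^2 / (2 sigma^2) - lam ||x||_1)."""
    n = A.shape[1]
    x = np.zeros(n)
    samples = np.empty((n_samples, n))
    for s in range(n_samples):
        for i in range(n):
            a = A[:, i]
            r_i = A @ x - y - a * x[i]                # residual with component i removed
            logpdf = lambda t: (-np.dot(r_i + a * t, r_i + a * t) / (2 * sigma**2)
                                - lam * abs(t))
            x[i] = slice_sample_1d(logpdf, x[i])
        samples[s] = x
    return samples
```

    Note that under an $\ell_1$ prior each single-component conditional is a piecewise (truncated) Gaussian, which is what makes the exact, explicit sampling used in the paper feasible; the generic slice sampler above is just a simple stand-in.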

    Identifying common health indicators from paediatric core outcome sets: a systematic review with narrative synthesis using the WHO International Classification of Functioning, Health and Disability

    Background Indicators of child health have the potential to inform societal conversations, decision-making and prioritisation. Paediatric core outcome sets are an increasingly common way of identifying a minimum set of outcomes for trials within clinical groups. Exploring commonality across existing sets may give insight into universally important and inclusive child health indicators. Methods A search of the Core Outcome Measures in Effectiveness Trials register from 2008 to 2022 was carried out. Eligible articles were those reporting on core outcome sets focused on children and young people aged 0–18 years. The International Classification of Functioning, Disability and Health (ICF) was used as a framework to categorise extracted outcomes. Information about the involvement of children, young people and their families in the development of sets was also extracted. Results 206 articles were identified, of which 36 were included. 441 unique outcomes were extracted, mapping to 22 outcome clusters present across multiple sets. Medical diagnostic outcomes were the biggest cluster, followed by pain, communication and social interaction, mobility, self-care and school. Children and young people's views were under-represented across core outcome sets, with only 36% of reviewed studies including them at any stage of development. Conclusions Existing paediatric core outcome sets show overlap in key outcomes, suggesting the potential for generic child health measurement frameworks. It is unclear whether existing sets best reflect the health dimensions important to children and young people, and there is a need for better child and young person involvement in health indicator development to address this.

    Learning how to understand complexity and deal with sustainability challenges: A framework for a comprehensive approach and its application in university education

    Sustainability challenges such as climate change, biodiversity loss, poverty and rapid urbanization are complex and strongly interrelated. In order to successfully deal with these challenges, we need comprehensive approaches that integrate knowledge from multiple disciplines and perspectives and emphasize interconnections. In short, such approaches aid in observing matters from a wider perspective without losing an understanding of the details. In order to teach and learn a comprehensive approach, we need to better understand what comprehensive thinking actually is. In this paper, we present a conceptual framework for a comprehensive approach, termed the GHH framework. The framework comprises three dimensions: generalism, holism, and holarchism. It contributes to the academic community's understanding of comprehensive thinking, and it can be used for integrating comprehensive thinking into education. Practical examples of the application of the framework in university teaching are also presented. We argue that an ideal approach to sustainability challenges, and to complexity in general, is a balanced, dialectical combination of comprehensive and differentiative approaches. The current dominance of specialization, or the differentiative approach, in university education calls for a stronger emphasis on comprehensive thinking skills. Comprehensiveness should not be considered a flawed approach, but rather as important an aspect of education as specialized and differentiative skills.

    Syntheses, structure, reactivity and species recognition studies of oxo-vanadium(V) and -molybdenum(VI) complexes

    Alkoxo-rich Schiff bases of potentially tri-, tetra- and penta-dentate binding capacity, and their sodium tetrahydroborate-reduced derivatives, have been synthesized. Their oxo-vanadium(V) and -molybdenum(VI) complexes were synthesized and characterized using several analytical and spectral techniques, including multinuclear NMR spectroscopy and single-crystal X-ray diffraction studies. Eight structurally different types of complexes possessing distorted square-pyramidal, trigonal-bipyramidal and octahedral geometries have been obtained. While oxo-vanadium(V) exhibits dimeric structures with 2-HOC6H4CH=NC(CH2OH)3, 2-HOC6H4CH2NHC(CH2OH)3 and related ligands through the formation of a symmetric V2O2 core as a result of bridging by one of the CH2O- groups, molybdenum(VI) gives only mononuclear complexes even when some unbound CH2OH groups are available and the metal center is coordinatively unsaturated. In all the complexes the nitrogen atom of an HC=N or H2C-NH group of the ligand occupies a position nearly trans to the M=O bond. While the Schiff-base ligands act in a tri- and tetra-dentate manner in the vanadium(V) complexes, they are only tridentate in the molybdenum(VI) complexes. Proton NMR spectra in the region of the bound CH2 groups provide a signature that helps to differentiate dinuclear from mononuclear complexes. Carbon-13 NMR coordination-induced shifts of the bound CH2 group fit well with the charge on the oxometal species and the terminal or bridging nature of the ligand. The reactivity of the vanadium(V) complexes towards bromination of the dye xylene cyanole was studied. Transmetallation reactions of several preformed metal complexes of 2-HOC6H4CH=NC(CH2OH)3 with VO^3+ were demonstrated, as was selective extraction of VO^3+ from a mixture of [VO(acac)2] and [MoO2(acac)2] using this Schiff base. The unusual selectivity of this ligand and of related derivatives for VO^3+ is supported by binding constants and the solubility of the final products, and was established through a.c. conductivity measurements. The cis-MoO2^2+ complexes with alkoxo binding showed an average Mo-O(alkoxo) distance of 1.926 Angstrom, a value close to that observed in the molybdenum(VI) enzyme DMSO reductase (1.92 Angstrom). Several correlations have been drawn based on these data.

    Approximate Marginalization of Absorption and Scattering in Fluorescence Diffuse Optical Tomography

    In fluorescence diffuse optical tomography (fDOT), the reconstruction of the fluorophore concentration inside the target body is usually carried out using a normalized Born approximation model, where the measured fluorescent emission data are scaled by the measured excitation data. One of the benefits of the model is that it can, to some extent, tolerate inaccuracy in the absorption and scattering distributions that are used in the construction of the forward model. In this paper, we employ the recently proposed Bayesian approximation error approach in combination with the normalized Born approximation model to compensate for the modeling errors caused by the inaccurately known optical properties of the target. The approach is evaluated using a simulated test case with different amounts of error in the optical properties. The results show that the Bayesian approximation error approach improves the tolerance of fDOT imaging against modeling errors caused by inaccurately known absorption and scattering of the target.
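    A schematic of the approximation error idea (a sketch consistent with the general Bayesian approximation error methodology, not the paper's exact implementation): the statistics of the discrepancy between an accurate forward model evaluated with plausible optical properties and the fixed nominal model are estimated by Monte Carlo and then folded into the noise model. The function names below are hypothetical.

```python
import numpy as np

def approximation_error_stats(forward_accurate, forward_nominal, sample_unknowns, n_draws=200):
    """Monte Carlo estimate of the mean and covariance of the modeling error
    eps = A(x, theta) - A_0(x), with theta the uncertain absorption/scattering."""
    draws = []
    for _ in range(n_draws):
        x, theta = sample_unknowns()          # draw a fluorophore image and optical properties
        draws.append(forward_accurate(x, theta) - forward_nominal(x))
    draws = np.asarray(draws)
    return draws.mean(axis=0), np.cov(draws, rowvar=False)

# The estimated mean and covariance are then added to the measurement-noise
# statistics when forming the (approximate) likelihood used in the inversion.
```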

    Testing the Resolving Power of 2-D K^+ K^+ Interferometry

    Adopting a procedure previously proposed to quantitatively study two-dimensional pion interferometry, an equivalent 2-D chi^2 analysis was performed to test the resolving power of that method when applied to less favorable conditions, i.e., when no significant contribution from long-lived resonances is expected, as in kaon interferometry. For that purpose, use is made of the preliminary E859 K^+ K^+ interferometry data from Si+Au collisions at 14.6 AGeV/c. As expected, less sensitivity is achieved in the present case, although it is still possible to distinguish two distinct decoupling geometries. The present analysis seems to favor scenarios with no resonance formation in the AGS energy range, if the preliminary K^+ K^+ data are confirmed. The possible compatibility of the data with a zero decoupling proper-time interval, conjectured in the 3-D experimental analysis, is also investigated and is ruled out when considering more realistic dynamical models with expanding sources. These results, however, clearly demonstrate the important influence of the emission time interval on the source's effective transverse dimensions. Furthermore, they strongly emphasize that the static Gaussian parameterization commonly used to fit data cannot be trusted under more realistic conditions, leading to distorted or even wrong interpretations of the source parameters.
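    For reference, the static two-dimensional Gaussian parameterization referred to above is commonly written as follows, with lambda the chaoticity parameter and R_T, R_L the transverse and longitudinal radii (a standard form; the exact conventions of the paper may differ):

```latex
C_2(q_T, q_L) \;=\; 1 + \lambda \,\exp\!\left(- q_T^2 R_T^2 - q_L^2 R_L^2\right)
```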

    Current-spin-density functional study of persistent currents in quantum rings

    We present a numerical study of persistent currents in quantum rings using current-spin-density functional theory (CSDFT). This formalism allows for a systematic study of the joint effects of spin, interactions and impurities in realistic systems. It is illustrated that CSDFT is suitable for describing the physical effects related to Aharonov-Bohm phases by comparing energy spectra of impurity-free rings to existing exact diagonalization and experimental results. Further, we examine the effects of a symmetry-breaking impurity potential on the density and current characteristics of the system and propose that narrowing the confining potential at fixed impurity potential will suppress the persistent current in a characteristic way.
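    For context, the persistent current discussed above follows from the flux dependence of the ring's ground-state energy; at zero temperature the standard thermodynamic relation is (a textbook relation, not specific to this paper):

```latex
I(\Phi) \;=\; -\,\frac{\partial E_0(\Phi)}{\partial \Phi}
```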